In cryptography, Kerckhoffs's principle (also called Kerckhoffs's Desiderata, Kerckhoffs's assumption, axiom, or law) was stated by Auguste Kerckhoffs in the 19th century: A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.
Kerckhoffs's principle was reformulated (perhaps independently) by Claude Shannon as "The enemy knows the system." In that form, it is called Shannon's maxim. In contrast to "security through obscurity," it is widely embraced by cryptographers.
In 1883 Auguste Kerckhoffs[1] wrote two journal articles on La Cryptographie Militaire,[2] in which he stated six design principles for military ciphers. Translated from French, they are:[3]

1. The system must be practically, if not mathematically, indecipherable;
2. It must not require secrecy, and it should be able to fall into the hands of the enemy without inconvenience;
3. Its key must be communicable and retainable without the help of written notes, and changeable or modifiable at the will of the correspondents;
4. It must be applicable to telegraphic correspondence;
5. It must be portable, and its usage and operation must not require the assistance of several people;
6. Finally, given the circumstances in which it is to be used, the system must be easy to use, requiring neither mental strain nor the knowledge of a long series of rules to observe.
Some are no longer relevant given the ability of computers to perform complex encryption, but his second axiom, now known as Kerckhoffs's principle, is still critically important.
Stated simply, the security of a cryptosystem should depend solely on the secrecy of the key and the private randomizer.[4] Another way of putting it is that a method of secretly coding and transmitting information should be secure even if everyone knows how it works. Of course, even though the attacker is familiar with the system itself, the attacker does not know which of its many possible keyed instances is currently in use.
Using secure cryptography is supposed to replace the difficult problem of keeping messages secure with a much more manageable one: keeping relatively small keys secure. A system that requires long-term secrecy for something as large and complex as the whole design of a cryptographic system obviously cannot achieve that goal; it only replaces one hard problem with another. However, if a system is secure even when the enemy knows everything except the key, then all that is needed is to keep the keys secret.
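A minimal sketch of what this buys in practice, using Python's standard library (the names and messages below are chosen purely for illustration): the message-authentication scheme is an entirely public algorithm, HMAC-SHA256, and the only thing that must remain secret is a 32-byte key.

```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA256) is public knowledge; per Kerckhoffs's principle,
# security rests entirely on this small, easily replaced secret key.
key = secrets.token_bytes(32)

def authenticate(message: bytes) -> bytes:
    """Compute a tag that cannot be forged without the key, even by an
    attacker who knows exactly how the tag is produced."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison; nothing here is secret except `key`.
    return hmac.compare_digest(authenticate(message), tag)

tag = authenticate(b"attack at dawn")
assert verify(b"attack at dawn", tag)
assert not verify(b"attack at noon", tag)
```

If the key ever leaks, replacing it is a single call to `secrets.token_bytes`; nothing about the algorithm or its implementation has to change.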
There are a large number of ways the internal details of a widely used system could be discovered. The most obvious is that someone could bribe, blackmail or otherwise threaten staff or customers into explaining the system. In war, for example, one side will probably capture some equipment and people from the other side. Each side will also use spies to gather information.
If a method involves software, someone could do memory dumps or run the software under the control of a debugger in order to understand the method. If hardware is being used, someone could buy or steal some of the hardware and build whatever programs or gadgets are needed to test it. Hardware can also be dismantled so that the chip details can be examined under a microscope.
A generalization some make from Kerckhoffs's principle is, "The fewer and simpler the secrets that one must keep to ensure system security, the easier it is to maintain system security." Bruce Schneier ties it in with a belief that all security systems must be designed to fail as gracefully as possible:
Kerckhoffs's principle applies beyond codes and ciphers to security systems in general: every secret creates a potential failure point. Secrecy, in other words, is a prime cause of brittleness—and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility.[5]
Any security system depends crucially on keeping some things secret. However, Kerckhoffs's principle points out that the things kept secret ought to be those least costly to change if inadvertently disclosed.
For example, a cryptographic algorithm may be implemented by hardware and software that is widely distributed among users. If security depends on keeping that secret, then disclosure leads to major logistic difficulties in developing, testing, and distributing implementations of a new algorithm: it is "brittle." On the other hand, if keeping the algorithm secret is not important, but only the keys used with the algorithm must be secret, then disclosure of the keys simply requires the simpler, less costly process of generating and distributing new keys.
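This asymmetry can be sketched with a hypothetical in-memory key store, invented here purely for illustration: rotating a disclosed key is a one-line operation, whereas a disclosed secret algorithm would require redesigning the system and redeploying software to every user.

```python
import secrets

# Hypothetical key store, for illustration only: keys are small and cheap to
# replace, so disclosure of one is recoverable. A secretly designed algorithm
# offers no such inexpensive recovery path.
active_keys: dict[str, bytes] = {}

def rotate_key(key_id: str) -> bytes:
    """Replace the key identified by key_id with a fresh 256-bit value."""
    new_key = secrets.token_bytes(32)
    active_keys[key_id] = new_key
    return new_key

rotate_key("backup-server")   # a disclosed key is retired with one cheap call
```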
In accordance with Kerckhoffs's principle, the majority of civilian cryptography makes use of publicly known algorithms. By contrast, ciphers used to protect classified government or military information are often kept secret (see Type 1 encryption). However, it should not be assumed that government/military ciphers must be kept secret to maintain security. It is possible that they are intended to be as cryptographically sound as public algorithms, and that the decision to keep them secret is in keeping with a layered security posture.
Eric Raymond extends this principle in support of open source security software, saying, "Any security software design that doesn't assume the enemy possesses the source code is already untrustworthy; therefore, never trust closed source."[6]
For purposes of analysing ciphers, Kerckhoffs's principle neatly divides any design into two components. The key can be assumed to be secret for purposes of analysis; in practice, various measures are taken to protect it. Everything else is assumed to be knowable by the opponent, so everything except the key should be revealed to the analyst. Perhaps not all opponents know everything, but the analyst should, because the goal is to create a system that is secure against any enemy except one who learns the key.
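The consequence of this model can be shown with a deliberately weak toy cipher (a sketch, not a real design): since the algorithm is assumed to be known, the attacker's entire task reduces to searching the key space, which here is far too small.

```python
# Toy cipher: a repeating single-byte XOR. The algorithm is fully public, so
# the only unknown is the one-byte key, and exhaustive search recovers it
# instantly. Security must therefore come from the size and secrecy of the
# key, not from hiding the algorithm.
def toy_encrypt(key: int, plaintext: bytes) -> bytes:
    return bytes(b ^ key for b in plaintext)

ciphertext = toy_encrypt(0x5A, b"meet at the bridge")

# The attacker knows toy_encrypt in full and simply tries every possible key.
for candidate in range(256):
    guess = toy_encrypt(candidate, ciphertext)   # XOR is its own inverse
    if guess.startswith(b"meet"):                # check against likely plaintext
        print("recovered key:", candidate)
        break
```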
John Savard describes the widespread acceptance of this idea:
That the security of a cipher system should depend on the key and not the algorithm has become a truism in the computer era, and this one is the best-remembered of Kerckhoffs's dicta. ... Unlike a key, an algorithm can be studied and analyzed by experts to determine if it is likely to be secure. An algorithm that you have invented yourself and kept secret has not had the opportunity for such review.[7]
It is moderately common for companies, and sometimes even standards bodies – as in the case of the CSS encryption used on DVDs – to keep the inner workings of a system secret. Some argue this "security through obscurity" makes the product safer and less vulnerable to attack. A counter-argument is that keeping the innards secret may improve security in the short term, but in the long run only systems that have been published and analyzed should be trusted.
Steve Bellovin commented:
The subject of security through obscurity comes up frequently. I think a lot of the debate happens because people misunderstand the issue.
It helps, I think, to go back to Kerckhoffs's second principle, translated as "The system must not require secrecy and can be stolen by the enemy without causing trouble" (per http://petitcolas.net/fabien/kerckhoffs/). Kerckhoffs said neither "publish everything" nor "keep everything secret"; rather, he said that the system should still be secure *even if the enemy has a copy*.
In other words – design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.) After that, though, there's nothing wrong with trying to keep it secret – it's another hurdle factor the enemy has to overcome. (One obstacle the British ran into when attacking the German Enigma system was simple: they didn't know the unkeyed mapping between keyboard keys and the input to the rotor array.) But – *don't rely on secrecy*.[8]